
The Promise and Peril of AI

TIME - Tech

In early 2023, following an international conference that included dialogue with China, the United States released a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of "human control" itself is hazier than it might seem. If humans authorized a future AI system to "stop an incoming nuclear attack," how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes. We need to recognize that AI technologies are inherently dual-use.


Pope calls for ban on 'lethal autonomous weapons' at G7

Al Jazeera

Pope Francis called for a ban on "lethal autonomous weapons" in an address to the G7 leaders' summit in Italy on the perils of artificial intelligence (AI). On Friday, the pontiff became the first head of the Roman Catholic Church ever to attend a Group of Seven meeting. "In light of the tragedy that is armed conflict, it is urgent to reconsider the development and use of devices like the so-called 'lethal autonomous weapons' and ultimately ban their use," the pope said. "This starts from an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being."


OpenAI chief Altman described what 'scary' AI means to him, but ChatGPT has its own examples

FOX News

Sam Altman, CEO of OpenAI, the artificial intelligence lab behind ChatGPT, took questions from reporters after his congressional hearing, including about his definition of "scary AI." Altman testified before Congress in Washington, D.C., this week about regulating artificial intelligence, his personal fears over the technology, and what "scary" AI systems mean to him. Fox News Digital also asked OpenAI's wildly popular chatbot, ChatGPT, to weigh in on examples of "scary" artificial intelligence systems, and it offered six hypothetical instances of how AI could become weaponized or have potentially harmful impacts on society. When asked by Fox News Digital on Tuesday after his testimony before a Senate Judiciary subcommittee, Altman gave examples of "scary AI" that included systems that could design "novel biological pathogens." "An AI that could hack into computer systems," he continued. "I think these are all scary. These systems can become quite powerful, which is why I was happy to be here today and why I think this is so important."


'Eyes and ears': Could drones prove decisive in the Ukraine war?

Al Jazeera

Warning: Some readers may find some of the scenes described in this article disturbing. Kyiv, Ukraine – Ivan Ukraintsev, a stern-faced insurance broker turned director of a wartime charity providing crucial aid to Ukraine's military forces, is on a mission: to help Ukraine win the drone war. He is a polite but no-nonsense character, and he is here to talk about drones. "If we [Ukraine] had enough drones, we could end this war in two months," he says firmly. Ivan, who heads up the charity Starlife, had recently returned from overseeing a drone delivery to Bakhmut, a city in eastern Ukraine that has become the focal point for months of bloody battles between Ukrainian and Russian forces. Trench warfare, pockmarked and corpse-ridden swathes of no man's land, and constant artillery bombardments have drawn comparisons to battlefield conditions during World War I.


Benefits & Risks of Artificial Intelligence - Future of Life Institute

#artificialintelligence

"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before - as long as we manage to keep the technology beneficial." From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI).


Newspaper articles written by robots?

#artificialintelligence

Theoretical physicist Stephen Hawking, arguably one of the smartest people in history, warned in an interview with the BBC that "the development of full artificial intelligence (AI) could spell the end of the human race." Hawking went on to say, at the Web Summit technology conference in Lisbon, Portugal, "AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy." In 2015, dozens of prominent scientists and technology experts, including Hawking and Elon Musk, signed a letter warning that, even though AI could be used for great good, it could also have potentially devastating, dangerous and unintended uses.


Autonomous Weapons Are Here, but the World Isn't Ready for Them

#artificialintelligence

This may be remembered as the year when the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It's also the year when policymakers failed to agree on what to do about it. On Friday, 120 countries participating in the United Nations' Convention on Certain Conventional Weapons could not agree on whether to limit the development or use of lethal autonomous weapons. Instead, they pledged to continue and "intensify" discussions. "It's very disappointing, and a real missed opportunity," says Neil Davison, senior scientific and policy adviser at the International Committee of the Red Cross, a humanitarian organization based in Geneva.


Algorithms of war: The military plan for artificial intelligence

#artificialintelligence

At the outbreak of World War I, the French army was mobilised in the fashion of Napoleonic times. On horseback and equipped with swords, the cuirassiers wore bright tricolour uniforms topped with feathers--the same get-up as when they swept through Europe a hundred years earlier. Vast fields were filled with trenches, barbed wire, poison gas and machine gun fire--plunging the ill-equipped soldiers into a violent hellscape of industrial-scale slaughter. Only three decades after the first bayonet charges of World War I across no man's land, the US was able to incinerate entire cities with a single nuclear bomb blast. And since the destruction of Hiroshima and Nagasaki in 1945, our rulers' methods of war have been made yet more deadly and "efficient".


It's time to ban autonomous killer robots before they become a threat

#artificialintelligence

The writer is professor of computer science and Smith-Zadeh professor in engineering, University of California, Berkeley. The subject of autonomous killer robots exercises many technologists, politicians and human rights activists. Indeed, the Financial Times's advice page for would-be opinion writers complains that, in their pitches, "lots of people spin doomsday scenarios about robots".


Germany warns: AI arms race already underway

#artificialintelligence

"An AI arms race is already underway. That's the reality we have to deal with," German Foreign Minister Heiko Maas told DW, speaking in a new DW documentary, "Future Wars -- and How to Prevent Them." "This is a race that cuts across the military and the civilian fields," said Amandeep Singh Gill, former chair of the United Nations group of governmental experts on lethal autonomous weapons. "This is a multi-trillion dollar question." This is apparent in a recent report from the United States' National Security Commission on Artificial Intelligence. It speaks of a "new warfighting paradigm" pitting "algorithms against algorithms," and urges massive investments "to continuously out-innovate potential adversaries." And you can see it in China's latest five-year plan, which places AI at the center of a relentless ramp-up in research and development, while the People's Liberation Army girds for a future of what it calls "intelligentized warfare." As Russian President Vladimir Putin put it as early as 2017, "whoever ...